Adversarial classification via distributional robustness with Wasserstein ambiguity


Abstract

We study a model for adversarial classification based on distributionally robust chance constraints. We show that under Wasserstein ambiguity, the model aims to minimize the conditional value-at-risk of the distance to misclassification, and we explore links to adversarial classification models proposed earlier and to maximum-margin classifiers. We also provide a reformulation of the model for linear classification, showing that it is equivalent to minimizing a regularized ramp loss objective. Numerical experiments show that, despite the nonconvexity of this formulation, standard descent methods appear to converge to the global minimizer of this problem. Inspired by this observation, we show that, for a certain class of distributions, the only stationary point of the regularized ramp loss minimization problem is the global minimizer.
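The regularized ramp loss objective mentioned in the abstract can be illustrated with a small sketch. This is not the paper's exact formulation; it assumes an objective of the form λ·‖w‖ plus the mean ramp loss min(1, max(0, 1 − z)) over the margins z = y·(w·x + b), and uses a crude finite-difference descent on toy data purely to show the kind of descent behavior the abstract describes.

```python
import numpy as np

def ramp_loss(z):
    # ramp loss: hinge loss clipped at 1, i.e. min(1, max(0, 1 - z))
    return np.minimum(1.0, np.maximum(0.0, 1.0 - z))

def objective(w, b, X, y, lam):
    # illustrative regularized ramp loss for a linear classifier
    margins = y * (X @ w + b)
    return lam * np.linalg.norm(w) + ramp_loss(margins).mean()

def descent_step(w, b, X, y, lam, lr=0.1, eps=1e-6):
    # finite-difference (sub)gradient step; the objective is piecewise
    # linear, so this is only a rough illustration of "standard descent"
    g_w = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g_w[i] = (objective(w + e, b, X, y, lam)
                  - objective(w - e, b, X, y, lam)) / (2 * eps)
    g_b = (objective(w, b + eps, X, y, lam)
           - objective(w, b - eps, X, y, lam)) / (2 * eps)
    return w - lr * g_w, b - lr * g_b

# toy two-cluster data, labels +1 / -1
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2, 0.5, (20, 2)), rng.normal(-2, 0.5, (20, 2))])
y = np.array([1.0] * 20 + [-1.0] * 20)

w, b = np.array([0.1, -0.1]), 0.0
for _ in range(200):
    w, b = descent_step(w, b, X, y, lam=0.05)
```

On well-separated data like this, the iterates settle at a separating hyperplane even though the ramp loss is nonconvex, consistent with the convergence behavior the abstract reports.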


Similar articles

Certifiable Distributional Robustness with Principled Adversarial Training

Neural networks are vulnerable to adversarial examples, and researchers have proposed many heuristic attack and defense mechanisms. We take the principled view of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbation of the underlying data distribution in a Wasserst...


Wasserstein Distributional Robustness and Regularization in Statistical Learning

A central question in statistical learning is to design algorithms that not only perform well on training data, but also generalize to new and unseen data. In this paper, we tackle this question by formulating a distributionally robust stochastic optimization (DRSO) problem, which seeks a solution that minimizes the worst-case expected loss over a family of distributions that are close to the em...


Certifying Some Distributional Robustness with Principled Adversarial Training

Neural networks are vulnerable to adversarial examples and researchers have proposed many heuristic attack and defense mechanisms. We address this problem through the principled lens of distributionally robust optimization, which guarantees performance under adversarial input perturbations. By considering a Lagrangian penalty formulation of perturbing the underlying data distribution in a Wasse...


Wasserstein Generative Adversarial Network

Recent advances in deep generative models give us a new perspective on modeling high-dimensional, nonlinear data distributions. In particular, GAN training can successfully produce sharp, realistic images. However, a GAN sidesteps traditional maximum likelihood learning and instead adopts a two-player game approach. This new training behaves very differently compared to ML learning. Ther...


Wasserstein Generative Adversarial Networks

We introduce a new algorithm named WGAN, an alternative to traditional GAN training. In this new model, we show that we can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches. Furthermore, we show that the corresponding optimization problem is sound, and provide extensive theoretical ...



Journal

Journal title: Mathematical Programming

Year: 2022

ISSN: 0025-5610, 1436-4646

DOI: https://doi.org/10.1007/s10107-022-01796-6